15 research outputs found

    Creating and augmenting keyboards for extended reality with the Keyboard Augmentation Toolkit

    This article discusses the Keyboard Augmentation Toolkit (KAT), which supports the creation of virtual keyboards that can be used both for standalone input (e.g., for mid-air text entry) and to augment physically tracked keyboards/surfaces in mixed reality. In a user study, we first examine the impact and pitfalls of visualising shortcuts on a tracked physical keyboard, exploring the utility of virtual per-keycap displays. Supported by this and other recent developments in XR keyboard research, we then describe the design, development, and evaluation-by-demonstration of KAT. KAT simplifies the creation of virtual keyboards (optionally bound to a tracked physical keyboard) that support enhanced display (2D/3D per-key content that conforms to the virtual key bounds); enhanced interactivity (extensible per-key states such as tap, dwell, touch, and swipe); flexible keyboard mappings that can encapsulate groups of interaction and display elements, e.g., enabling application-dependent interactions; and flexible layouts, allowing the virtual keyboard to merge with and augment a physical keyboard, or switch to an alternate layout (e.g., mid-air) based on need. Through these features, KAT will assist researchers in the prototyping, creation, and replication of XR keyboard experiences, fundamentally altering the keyboard's form and function.
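    The feature set above (per-key display content, extensible per-key states, application-dependent mappings, and switchable layouts) can be made concrete with a small data model. The Python sketch below is purely illustrative: the class and field names are hypothetical and do not reflect KAT's actual API.

```python
# Hypothetical sketch (not KAT's actual API): a minimal data model for the
# kind of per-key augmentation described in the abstract. Names are illustrative.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Callable, Dict, List, Optional


class KeyState(Enum):
    TAP = auto()
    DWELL = auto()
    TOUCH = auto()
    SWIPE = auto()


@dataclass
class KeyDisplay:
    label: str                      # 2D text shown on the virtual keycap
    model_3d: Optional[str] = None  # optional 3D content conforming to the key bounds


@dataclass
class KeyBinding:
    display: KeyDisplay
    # Per-state handlers, e.g. a shortcut on TAP, a preview on DWELL.
    handlers: Dict[KeyState, Callable[[], None]] = field(default_factory=dict)


@dataclass
class KeyboardMapping:
    """Groups display and interaction elements, e.g. per application."""
    name: str
    bindings: Dict[str, KeyBinding] = field(default_factory=dict)


@dataclass
class VirtualKeyboard:
    layout: str                      # e.g. "physical-overlay" or "mid-air"
    mappings: List[KeyboardMapping] = field(default_factory=list)
    tracked_physical: bool = False   # whether it augments a tracked physical keyboard

    def dispatch(self, key: str, state: KeyState) -> None:
        """Route an input event to the first mapping that binds this key and state."""
        for mapping in self.mappings:
            binding = mapping.bindings.get(key)
            if binding and state in binding.handlers:
                binding.handlers[state]()
                return
```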

    Characterizing first and third person viewpoints and their alternation for embodied interaction in virtual reality

    Empirical research on the bodily self has shown that the body representation is malleable and prone to manipulation when conflicting sensory stimuli are presented. Using Virtual Reality (VR), we assessed the effects of manipulating multisensory feedback (full body control and visuo-tactile congruence) and visual perspective (first and third person perspective) on the sense of embodying a virtual body that was exposed to a virtual threat. We also investigated how subjects behaved when given the possibility of alternating between first and third person perspectives at will. Our results show that illusory ownership of a virtual body can be achieved in both first and third person perspectives under a congruent visuo-motor-tactile condition. However, subjective body ownership and reaction to threat were generally stronger for the first person perspective and the alternating condition than for the third person perspective. This suggests that the possibility of alternating perspective is compatible with a strong sense of embodiment, which is meaningful for the design of new embodied VR experiences.

    The Critical Role of Self-Contact for Embodiment in Virtual Reality

    With the broad range of motion capture devices available on the market, it is now commonplace to directly control the limb movement of an avatar during immersion in a virtual environment. Here, we study how the subjective experience of embodying a full-body controlled avatar is influenced by motor alteration and self-contact mismatches. Self-contact is in particular a strong source of passive haptic feedback, and we expect it to bring a clear benefit in terms of embodiment. To evaluate this hypothesis, we experimentally manipulate self-contacts and the virtual hand's displacement relative to the body. We introduce these body posture transformations to experimentally reproduce the imperfect or incorrect mapping between real and virtual bodies, with the goal of quantifying the limits of acceptance of a distorted mapping in terms of reported body ownership and agency. We first describe how we exploit egocentric coordinate representations to perform motion capture that ensures real and virtual hands coincide whenever the real hand is in contact with the body. Then, we present a pilot study that focuses on quantifying our sensitivity to visuo-tactile mismatches. The results are then used to design our main study with two factors, offset (for self-contact) and amplitude (for movement amplification). Our main result shows that subjects' sense of embodiment remains strong even when hand movements are artificially amplified, provided that correct self-contacts are ensured.
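    A minimal sketch of the egocentric-mapping idea described above, under stated assumptions: the body anchor, the nearest body-surface point, the contact threshold, and the function name are all hypothetical, and this is not the paper's implementation. It shows how an amplified, offset hand motion can still be forced to coincide with the real hand whenever the real hand touches the body.

```python
# Assumed illustration of an egocentric (body-relative) hand mapping with a
# self-contact constraint; not the authors' implementation.
import numpy as np

CONTACT_THRESHOLD = 0.02  # metres; hypothetical self-contact distance


def virtual_hand_position(real_hand: np.ndarray,
                          body_anchor: np.ndarray,
                          body_surface_point: np.ndarray,
                          amplitude: float,
                          offset: np.ndarray) -> np.ndarray:
    """Map the tracked real hand to a virtual hand position.

    real_hand, body_anchor, body_surface_point, offset: 3D points/vectors.
    amplitude: movement amplification factor (1.0 = veridical).
    """
    # Egocentric representation: hand displacement relative to the body anchor.
    egocentric = real_hand - body_anchor

    # Distorted mapping: amplified motion plus an experimental offset.
    distorted = body_anchor + amplitude * egocentric + offset

    # Self-contact constraint: if the real hand touches the body, the virtual
    # hand is made to coincide with it so passive haptics remain congruent.
    if np.linalg.norm(real_hand - body_surface_point) < CONTACT_THRESHOLD:
        return real_hand.copy()
    return distorted
```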

    Post-experiment responses for the VMT group.

    Values represent the total count of responses in favor of each perspective condition. Most subjects preferred to use 1PP, and felt safer in 3PP. When asked about the conditions, subjects thought ALT to be more efficient in the reaching task. ALT was also preferred by more subjects than the other conditions.

    Perspective conditions.

    The subject could experience the scene in three different conditions: (1PP) first person perspective; (3PP) third person perspective; or (ALT) free to alternate between 1PP and 3PP. When in the alternate condition, subjects were asked to perform at least 3 perspective switches.

    GSR variation time-locked to the floor fall event (response in microsiemens).

    (left) The green and red shaded areas highlight the time intervals used to compute the median GSR preceding (5 to 0 seconds before) and following (1 to 6 seconds after) the floor fall event for each subject. Each line color represents the GSR recording of one subject. The threat caused a statistically significant increase in the GSR response for all 6 combinations of conditions. (right) The difference between the medians is used to indicate the per-subject GSR change linked to the threat. A significant difference between 1PP and 3PP was observed.
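    As a rough illustration of the quantity described in this caption (not the authors' code), the following Python sketch computes the per-subject GSR change as the difference between the median GSR in the post-event window (1 to 6 s after the floor fall) and the pre-event window (5 to 0 s before it).

```python
# Assumed sketch of the per-subject GSR threat response described in the caption.
import numpy as np


def gsr_threat_response(timestamps: np.ndarray,
                        gsr: np.ndarray,
                        event_time: float) -> float:
    """Return post-event minus pre-event median GSR (microsiemens)."""
    # Pre-event window: 5 to 0 seconds before the floor fall.
    pre = gsr[(timestamps >= event_time - 5.0) & (timestamps < event_time)]
    # Post-event window: 1 to 6 seconds after the floor fall.
    post = gsr[(timestamps >= event_time + 1.0) & (timestamps <= event_time + 6.0)]
    return float(np.median(post) - np.median(pre))
```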

    Questionnaire results: Self-location and threat responses for the main effect of perspective and multisensory congruence.

    Error bars represent the confidence interval of the mean (CI). “*”, “**” and “***” indicate p < .05, p < .01 and p < .001 respectively.

    Breakdown of the proportion of time spent in 1PP for each stage of the ALT session for VMT and ¬VMT.

    Subjects tended to make balanced use of the perspectives in the REACH stage, while favoring 1PP for the following stages. Notably, the overall perspective choice shifted to 1PP once the reaching task was complete. 1PP seems to have been preferred by the VMT group when they had to complete the walking task. This was not the case for the ¬VMT group, who had no practical incentive to change perspective at this stage of the session, as the task was completed regardless of their actions. The WALK stage was the only one to present a statistically significant difference between the groups, as analyzed with pairwise t-tests (t(35) = 2.88, p < .01).
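    A hedged sketch of the two quantities behind this figure, with hypothetical function names: the per-subject proportion of time spent in 1PP during a stage, and an independent-samples t-test comparing the VMT and ¬VMT groups on those proportions (assumed to correspond to the pairwise t-test reported above).

```python
# Assumed analysis sketch; not the authors' code.
import numpy as np
from scipy import stats


def proportion_1pp(perspective_samples: np.ndarray) -> float:
    """perspective_samples: per-frame labels for one subject, 1 for 1PP, 0 for 3PP."""
    return float(np.mean(perspective_samples == 1))


def compare_groups(props_vmt: np.ndarray, props_not_vmt: np.ndarray):
    """Independent-samples t-test between the VMT and non-VMT groups
    on their per-subject 1PP proportions for a given stage."""
    return stats.ttest_ind(props_vmt, props_not_vmt)
```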

    Overview of the session stages.

    (a) First, the subject has to reach for targets that can appear either in the air or on the floor (REACH stage); (b) a final target invites the subject to walk to the wooden platform (WALK); (c) once on the platform, the subject is asked to feel the edges with their feet (WAIT); (d) finally, the wooden floor beneath the platform collapses, revealing the pit to the subject (OBSERVE). Subjects in the ¬VMT group do not perform these tasks; instead, they watch recordings from the VMT group. The session was followed by the mental ball drop (MBD) task and an embodiment questionnaire.

    Questionnaire results: Senses of agency and body ownership for the interaction between perspective and multisensory congruence.

    Error bars represent the confidence interval of the mean (CI). “*”, “**” and “***” indicate p < .05, p < .01 and p < .001 respectively.